A Universal Law of Robustness via Isoperimetry
Classically, data interpolation with a parametrized model class is possible as long as the number of parameters is larger than the number of equations to be satisfied. A puzzling phenomenon in the current practice of deep learning is that models are trained with many more parameters than what this classical theory would suggest. We propose a theoretical explanation for this phenomenon. We prove that for a broad class of data distributions and model classes, overparametrization is {\em necessary} if one wants to interpolate the data {\em smoothly}. Namely, we show that {\em smooth} interpolation requires $d$ times more parameters than mere interpolation, where $d$ is the ambient data dimension. We prove this universal law of robustness for any smoothly parametrized function class with polynomial-size weights, and any covariate distribution verifying isoperimetry. In the case of two-layer neural networks and Gaussian covariates, this law was conjectured in prior work by Bubeck, Li and Nagaraj. We also give an interpretation of our result as an improved generalization bound for model classes consisting of smooth functions.
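In quantitative form, the law can be sketched as follows (the symbols $n$ for the number of data points and $p$ for the number of parameters are notation assumed here, not taken from the abstract above):

```latex
% Sketch of the law of robustness: for data in R^d, any model f
% from the class that fits n (noisy) data points must satisfy
\[
  \operatorname{Lip}(f) \;\gtrsim\; \sqrt{\frac{nd}{p}},
\]
% so an O(1)-Lipschitz (smooth) interpolant requires p on the
% order of nd parameters --- a factor d more than the p ~ n
% parameters that mere interpolation requires.
```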
Automated Kantian Ethics: A Faithful Implementation
As we grant artificial intelligence increasing power and independence in contexts like healthcare, policing, and driving, AI faces moral dilemmas but lacks the tools to solve them. Warnings from regulators, philosophers, and computer scientists about the dangers of unethical artificial intelligence have spurred interest in automated ethics--i.e., the development of machines that can perform ethical reasoning. However, prior work in automated ethics rarely engages with philosophical literature. Philosophers have spent centuries debating moral dilemmas, so automated ethics will be most nuanced, consistent, and reliable when it draws on philosophical literature. In this paper, I present an implementation of automated Kantian ethics that is faithful to the Kantian philosophical tradition. I formalize Kant's categorical imperative in Dyadic Deontic Logic, implement this formalization in the Isabelle/HOL theorem prover, and develop a testing framework to evaluate how well my implementation coheres with expected properties of Kantian ethics. My system is an early step towards philosophically mature ethical AI agents, and it can make nuanced judgements in complex ethical dilemmas because it is grounded in philosophical literature. Because I use an interactive theorem prover, my system's judgements are explainable.
The State of Artificial Intelligence in 2018: A Good Old Fashioned Re…
Sunny Mishra, RPA CoE - Consulting Architect at ExxonMobil: Great deck, but I have some minor issues. As a universal law, we cannot teach machines more intelligence than what we have at this point in time. So, instead of calling it "Artificial Intelligence", we should drop the word "Artificial" and the word "Intelligence". I do not believe that there is any "artificiality" to any intelligence. First of all, intelligence is gained/learned from us following the "Rules" and "Data" that we have been associated with since we were born. This learning process dictates our outcome, and that is fixed. There is no such thing as "Gut Feeling"; it does not exist. We simply make it up to get a point across. So no matter how big or complex the machines we build, they will only learn to behave by the "Rules" and the associated "Data", which always have a "fixed" outcome or result, same as ours. By the laws of the universe, without evolving, we would have remained cave dwellers. So, every day of our lives, we observe new rules and results, which evolve us to the next level. But if, for example, I am locked up in a dark room, isolated from observing any new rules, data, or results, I will be at the same level of intelligence as the day I was locked in. Similarly, if we cannot generate any new intelligence in isolation, we cannot feed the robots any new rules, and hence they will remain at a certain level of intelligence forever. What I am trying to say here is "...AI will never be more intelligent than its creator...".
Efficient coding explains the universal law of generalization in human perception
Perceptual generalization and discrimination are fundamental cognitive abilities. For example, if a bird eats a poisonous butterfly, it will learn to avoid preying on that species again by generalizing its past experience to new perceptual stimuli. In cognitive science, the "universal law of generalization" seeks to explain this ability and states that generalization between stimuli will follow an exponential function of their distance in "psychological space." Here, I challenge existing theoretical explanations for the universal law and offer an alternative account based on the principle of efficient coding. I show that the universal law emerges inevitably from any information processing system (whether biological or artificial) that minimizes the cost of perceptual error subject to constraints on the ability to process or transmit information.
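In symbols, the universal law referenced above can be sketched as follows (the names $g$, $d$, and the decay constant $c$ are notation assumed here for illustration):

```latex
% Shepard's universal law of generalization (sketch): the
% probability g(x, y) that a response learned for stimulus x
% generalizes to stimulus y decays exponentially with their
% distance d(x, y) in psychological space, for some c > 0:
\[
  g(x, y) \;=\; e^{-c\, d(x, y)}.
\]
```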
Column: Why we'll always fear monsters
Fear continues to saturate our lives: fear of nuclear destruction, fear of climate change, fear of the subversive, and fear of foreigners. But a recent Rolling Stone article about our "age of fear" notes that most Americans are living "in the safest place at the safest time in human history." Around the globe, household wealth, longevity and education are on the rise, while violent crime and extreme poverty are down. In the U.S., life expectancy is higher than ever, our air is the cleanest it's been in a decade and, despite a slight uptick last year, violent crime has been trending down since 1991. Emerging technology and media could play a role.